Defending against Adversarial Images using Basis Functions Transformations

Authors

  • Uri Shaham
  • James Garritano
  • Yutaro Yamada
  • Ethan Weinberger
  • Alex Cloninger
  • Xiuyuan Cheng
  • Kelly Stanton
  • Yuval Kluger
Abstract

We study the effectiveness of various approaches that defend against adversarial attacks on deep networks via manipulations based on basis function representations of images. Specifically, we experiment with low-pass filtering, PCA, JPEG compression, low resolution wavelet approximation, and soft-thresholding. We evaluate these defense techniques using three types of popular attacks in black-, gray-, and white-box settings. Our results show that JPEG compression tends to outperform the other tested defenses in most of the settings considered, in addition to soft-thresholding, which performs well in specific cases and yields a milder decrease in accuracy on benign examples. In addition, we mathematically derive a novel white-box attack in which the adversarial perturbation is composed only of terms corresponding to a predetermined subset of the basis functions, of which a “low frequency attack” is a special case.
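As a concrete illustration of one of these basis-function defenses, the following is a minimal sketch of wavelet soft-thresholding applied to an input image before it is passed to a classifier. It assumes PyWavelets and NumPy; the wavelet, decomposition level, and threshold are illustrative choices, not the paper's settings.

```python
# Minimal sketch of a basis-function defense: soft-thresholding of wavelet
# coefficients before classification. Parameters are illustrative.
import numpy as np
import pywt

def soft_threshold_defense(image, wavelet="db4", level=2, thresh=0.05):
    """Shrink small wavelet coefficients toward zero, then reconstruct.

    `image` is a 2-D float array in [0, 1]. Adversarial perturbations tend to
    live in small, high-frequency coefficients that soft-thresholding suppresses.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    shrunk = [approx]
    for (cH, cV, cD) in details:
        shrunk.append(tuple(
            np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
            for c in (cH, cV, cD)
        ))
    recon = pywt.waverec2(shrunk, wavelet)
    # waverec2 may pad by a pixel; crop back to the original shape.
    return np.clip(recon[:image.shape[0], :image.shape[1]], 0.0, 1.0)
```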


Related papers

SATYA : Defending Against Adversarial Attacks Using Statistical Hypothesis Testing

The paper presents a new defense against adversarial attacks for deep neural networks. We demonstrate the effectiveness of our approach against the popular adversarial image generation method DeepFool. Our approach uses Wald’s Sequential Probability Ratio Test to sufficiently sample a carefully chosen neighborhood around an input image to determine the correct label of the image. On a benchmark...
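As a rough illustration of the idea of sequential hypothesis testing on samples around an input (not the paper's exact procedure), the sketch below runs a binary SPRT on whether the classifier's label survives small random perturbations. The `classify` function, noise level, and Bernoulli rates `p0`/`p1` are assumptions introduced for the example.

```python
# Much-simplified sketch: Wald's SPRT on whether the predicted label is
# stable under small random perturbations of the input. NOT the paper's
# exact procedure; all parameters below are illustrative assumptions.
import numpy as np

def sprt_label_check(image, classify, sigma=0.05, p0=0.5, p1=0.9,
                     alpha=0.05, beta=0.05, max_samples=200, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    orig = classify(image)
    # Wald's decision bounds for error rates alpha and beta.
    a, b = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = 0.0
    for _ in range(max_samples):
        noisy = np.clip(image + rng.normal(0, sigma, image.shape), 0, 1)
        agree = classify(noisy) == orig
        llr += np.log(p1 / p0) if agree else np.log((1 - p1) / (1 - p0))
        if llr >= b:
            return orig   # label is stable under perturbation
        if llr <= a:
            return None   # label unstable: flag as possibly adversarial
    return orig
```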


Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser

Neural networks are vulnerable to adversarial examples. This phenomenon poses a threat to their applications in security-sensitive systems. It is thus important to develop effective defense methods to strengthen the robustness of neural networks to adversarial attacks. Many techniques have been proposed, but only a few of them are validated on large datasets like ImageNet. We prop...


On the Suitability of Lp-norms for Creating and Preventing Adversarial Examples

Much research effort has been devoted to better understanding adversarial examples, which are specially crafted inputs to machine-learning models that are perceptually similar to benign inputs, but are classified differently (i.e., misclassified). Both algorithms that create adversarial examples and strategies for defending against them typically use Lp-norms to measure the perceptual similarit...


Mitigating adversarial effects through randomization

Convolutional neural networks have demonstrated their powerful ability on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. That is, clean images with imperceptible perturbations added can easily cause convolutional neural networks to fail. In this paper, we propose to utilize randomization to mitigate adversarial effects. Specifically, we use two ran...
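One common way to realize this kind of input randomization is to randomly resize the image and then randomly pad it to a fixed size before classification. A minimal sketch of that idea, assuming NumPy and SciPy; the output size and interpolation order are illustrative, not taken from the paper.

```python
# Hedged sketch of input randomization: random resize, then random zero-padding
# to a fixed output size before the image is classified.
import numpy as np
from scipy.ndimage import zoom

def randomize_input(image, out_size=331, rng=None):
    """`image` is an HxWxC float array; returns an out_size x out_size x C array."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    new_size = int(rng.integers(low=min(h, w, out_size - 1), high=out_size))
    resized = zoom(image, (new_size / h, new_size / w, 1), order=1)
    pad_top = int(rng.integers(0, out_size - resized.shape[0] + 1))
    pad_left = int(rng.integers(0, out_size - resized.shape[1] + 1))
    padded = np.zeros((out_size, out_size, c), dtype=image.dtype)
    padded[pad_top:pad_top + resized.shape[0],
           pad_left:pad_left + resized.shape[1]] = resized
    return padded
```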


Countering Adversarial Images using Input Transformations

This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our exp...
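As a small illustration, two of the listed transformations (bit-depth reduction and JPEG compression) can be sketched as follows, assuming NumPy and Pillow; the bit depth and JPEG quality are illustrative choices rather than the paper's settings.

```python
# Hedged sketch of two simple input transformations used as defenses:
# bit-depth reduction and a JPEG compression round-trip.
import io
import numpy as np
from PIL import Image

def reduce_bit_depth(image, bits=3):
    """Quantize a float image in [0, 1] to 2**bits gray levels per channel."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

def jpeg_compress(image, quality=75):
    """Round-trip a float RGB image in [0, 1] through JPEG at the given quality."""
    pil = Image.fromarray((image * 255).astype(np.uint8))
    buf = io.BytesIO()
    pil.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf), dtype=np.float32) / 255.0
```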




Publication date: 2018